ethical agent
AI Ethics: A Bibliometric Analysis, Critical Issues, and Key Gaps
Gao, Di Kevin, Haverly, Andrew, Mittal, Sudip, Wu, Jiming, Chen, Jingdao
Artificial intelligence (AI) ethics has emerged as a burgeoning yet pivotal area of scholarly research. This study conducts a comprehensive bibliometric analysis of the AI ethics literature over the past two decades. The analysis reveals a discernible tripartite progression, characterized by an incubation phase, followed by a subsequent phase focused on imbuing AI with human-like attributes, culminating in a third phase emphasizing the development of human-centric AI systems. The study then presents seven key AI ethics issues, encompassing the Collingridge dilemma, the AI status debate, challenges associated with AI transparency and explainability, privacy protection complications, considerations of justice and fairness, concerns about algocracy and human enfeeblement, and the issue of superintelligence. Finally, it identifies two notable research gaps in AI ethics regarding the large ethics model (LEM) and AI identification, and extends an invitation for further scholarly research.
- North America > United States > Mississippi (0.05)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- Asia > China > Shanghai > Shanghai (0.04)
- (7 more...)
- Transportation > Passenger (1.00)
- Transportation > Ground > Road (1.00)
- Media (1.00)
- (7 more...)
Digital me ontology and ethics
Kocarev, Ljupco, Koteska, Jasna
This paper addresses the ontology and ethics of an AI agent called digital me. We define digital me as an autonomous, decision-making, and learning agent that represents an individual and has a practically immortal life of its own. It is assumed that digital me is equipped with the big-five personality model, ensuring that it provides a model of some aspects of a strong AI: consciousness, free will, and intentionality. As computer-based personality judgments are more accurate than those made by humans, digital me can judge the personality of the individual it represents, other individuals' personalities, and other digital me-s. We describe seven ontological qualities of digital me: a) the double-layer status of Digital Being versus digital me, b) digital me versus real me, c) mind-digital me and body-digital me, d) digital me versus doppelganger (shadow digital me), e) a non-human time concept, f) social quality, g) practical immortality. We argue that with the advancement of AI sciences and technologies, there exist two digital me thresholds. The first threshold defines a digital me having some (rudimentary) form of consciousness, free will, and intentionality. The second threshold assumes that digital me is equipped with moral learning capabilities, implying that, in principle, digital me-s could develop an ethics of their own that differs significantly from the human understanding of ethics. Finally, we discuss the implications of digital me metaethics, normative ethics, and applied ethics, as well as the implementation of the Golden Rule in digital me-s, and we suggest two sets of normative principles for digital me: consequentialist and duty-based digital me principles.
- North America > United States > New York (0.04)
- Europe > North Macedonia > Skopje Statistical Region > Skopje Municipality > Skopje (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Leisure & Entertainment > Games > Chess (1.00)
- Health & Medicine (1.00)
- Information Technology > Security & Privacy (0.93)
- Law (0.93)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Ontologies (0.62)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.49)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (0.48)
Responses to a Critique of Artificial Moral Agents
Poulsen, Adam, Anderson, Michael, Anderson, Susan L., Byford, Ben, Fossa, Fabio, Neely, Erica L., Rosas, Alejandro, Winfield, Alan
The field of machine ethics is concerned with the question of how to embed ethical behaviors, or a means to determine ethical behaviors, into artificial intelligence (AI) systems. The goal is to produce artificial moral agents (AMAs) that are either implicitly ethical (designed to avoid unethical consequences) or explicitly ethical (designed to behave ethically). Van Wynsberghe and Robbins' (2018) paper Critiquing the Reasons for Making Artificial Moral Agents critically addresses the reasons offered by machine ethicists for pursuing AMA research; this paper, co-authored by machine ethicists and commentators, aims to contribute to the machine ethics conversation by responding to that critique. The reasons for developing AMAs discussed in van Wynsberghe and Robbins (2018) are: it is inevitable that they will be developed; the prevention of harm; the necessity for public trust; the prevention of immoral use; such machines are better moral reasoners than humans; and building these machines would lead to a better understanding of human morality. In this paper, each co-author addresses those reasons in turn. In so doing, this paper demonstrates that the reasons critiqued are not shared by all co-authors; each machine ethicist has their own reasons for researching AMAs. But while we express a diverse range of views on each of the six reasons in van Wynsberghe and Robbins' critique, we nevertheless share the opinion that the scientific study of AMAs has considerable value.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- North America > United States > Connecticut (0.04)
- Oceania > Australia > New South Wales > Sydney (0.04)
- (14 more...)
- Summary/Review (1.00)
- Research Report > Experimental Study (0.67)
- Instructional Material > Course Syllabus & Notes (0.46)
- Research Report > New Finding (0.45)
- Law (1.00)
- Information Technology (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.45)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
Why Ethical Robots Might Not Be Such a Good Idea After All
This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE. This week my colleague Dieter Vanderelst presented our paper: "The Dark Side of Ethical Robots" at AIES 2018 in New Orleans. I blogged about Dieter's very elegant experiment here, but let me summarize. With two NAO robots he set up a demonstration of an ethical robot helping another robot acting as a proxy human, then showed that with a very simple alteration of the ethical robot's logic it is transformed into a distinctly unethical robot--behaving either competitively or aggressively toward the proxy human.
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.27)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- Europe > United Kingdom > England > Bristol (0.05)
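The single-alteration effect described in the entry above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the action names, harm estimates, and the `select_action` helper are all invented for demonstration, standing in for the robot's real consequence-evaluation machinery.

```python
# Sketch of a consequence-engine-style action selector: the robot scores
# each candidate action by the predicted harm to a proxy human. An
# "ethical" controller minimizes that harm; flipping a single sign turns
# the same machinery into a distinctly unethical controller.

def select_action(actions, predicted_harm, ethical=True):
    """Pick the action with the lowest predicted harm to the human
    (or, with the sign flipped, the highest)."""
    sign = 1 if ethical else -1
    return min(actions, key=lambda a: sign * predicted_harm[a])

# Hypothetical action set and harm estimates, for demonstration only.
actions = ["block_hazard", "stand_still", "push_toward_hazard"]
predicted_harm = {
    "block_hazard": 0.1,
    "stand_still": 0.5,
    "push_toward_hazard": 0.9,
}

select_action(actions, predicted_harm, ethical=True)   # -> "block_hazard"
select_action(actions, predicted_harm, ethical=False)  # -> "push_toward_hazard"
```

The point of the sketch mirrors the experiment's: the ethical and unethical robots share almost all of their logic, so the safeguard lives in one easily altered comparison.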
The Case for Explicit Ethical Agents
Scheutz, Matthias (Tufts University)
Morality is a fundamentally human trait which permeates all levels of human society, from basic etiquette and normative expectations of social groups, to formalized legal principles upheld by societies. Hence, future interactive AI systems, in particular cognitive systems on robots deployed in human settings, will have to meet human normative expectations, for otherwise these systems risk causing harm. While interest in “machine ethics” has increased rapidly in recent years, there are only very few current efforts in the cognitive systems community to investigate moral and ethical reasoning. And there is currently no cognitive architecture that has even rudimentary moral or ethical competence, i.e., the ability to judge situations based on moral principles such as norms and values and to make morally and ethically sound decisions. We hence argue for the urgent need to instill moral and ethical competence in all cognitive systems intended to be employed in human social contexts.
- North America > United States > California > Santa Clara County > Palo Alto (0.05)
- North America > United States > Texas > Travis County > Austin (0.04)
- North America > United States > New Jersey > Middlesex County > Piscataway (0.04)
- (7 more...)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
The Nature, Importance, and Difficulty of Machine Ethics
Machine ethics has a broad range of possible implementations in computer technology--from maintaining detailed records in hospital databases to overseeing emergency team movements after a disaster. From a machine ethics perspective, you can look at machines as ethical-impact agents, implicit ethical agents, explicit ethical agents, or full ethical agents. A current research challenge is to develop machines that are explicit ethical agents. This research is important, but accomplishing this goal will be extremely difficult without a better understanding of ethics and of machine learning and cognition. This article is part of a special issue on Machine Ethics.
Machine Ethics: Creating an Ethical Intelligent Agent
Anderson, Michael, Anderson, Susan Leigh
The newly emerging field of machine ethics (Anderson and Anderson 2006) is concerned with adding an ethical dimension to machines. Unlike computer ethics -- which has traditionally focused on ethical issues surrounding humans' use of machines -- machine ethics is concerned with ensuring that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable. In this article we discuss the importance of machine ethics, the need for machines that represent ethical principles explicitly, and the challenges facing those working on machine ethics. We also give an example of current research in the field that shows that it is possible, at least in a limited domain, for a machine to abstract an ethical principle from examples of correct ethical judgments and use that principle to guide its own behavior.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- North America > United States > California > San Mateo County > Menlo Park (0.04)
- (11 more...)
- Health & Medicine (1.00)
- Government > Regional Government > North America Government > United States Government (0.46)
- Government > Military (0.46)
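The Anderson and Anderson entry above describes a machine abstracting an ethical principle from examples of correct ethical judgments and then using that principle to guide behavior. The following is a hedged sketch of that general idea, not the Andersons' actual system: it learns weights over prima facie duties (e.g., beneficence, non-maleficence, autonomy) from labeled case pairs, perceptron-style, and uses the learned rule to rank actions in a new case. The duty vectors and training cases are invented.

```python
# Sketch: learn a weighted-duty decision rule from examples of correct
# ethical judgments. Each training case pairs the duty-satisfaction
# vector of the ethically correct action with that of an incorrect
# alternative; learning adjusts weights until the correct action
# outscores the alternative in every case.

def score(w, duties):
    """Weighted sum of duty-satisfaction values."""
    return sum(wi * d for wi, d in zip(w, duties))

def learn_duty_weights(cases, epochs=100, lr=0.1):
    """Perceptron-style updates toward the correct action's duties."""
    w = [0.0] * len(cases[0][0])
    for _ in range(epochs):
        for better, worse in cases:
            if score(w, better) <= score(w, worse):
                w = [wi + lr * (b - c) for wi, b, c in zip(w, better, worse)]
    return w

# Invented training cases: (duties of correct action, duties of incorrect one).
cases = [
    ([1.0, 0.8, 0.2], [0.9, 0.1, 0.9]),
    ([0.5, 0.9, 0.4], [0.6, 0.2, 0.8]),
]
w = learn_duty_weights(cases)

# The abstracted principle now ranks the actions in an unseen case:
assert score(w, [0.8, 0.9, 0.3]) > score(w, [0.8, 0.2, 0.9])
```

The learned weights play the role of the abstracted principle: they generalize beyond the training judgments, which is what lets the machine apply the principle to cases it has not seen, in the limited-domain sense the abstract describes.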